fix: default OpenAI provider for /openai/v1/responses WS path (#2995) #3006
Closed
thiscantbeserious wants to merge 1 commit into maximhq:main
Conversation
…q#2995)

The WebSocket responses handler called `ParseModelString` with an empty default provider at both call sites (`handleResponseCreate` line 177, `convertEventToRequest` line 422). For a bare model string like "gpt-4o" this returned provider="" which the guard treated as a fatal parse failure, producing a 400 error on `/openai/v1/responses`.

The fix threads a `defaultProvider` value inferred from the upgrade request path through `handleUpgrade` -> `eventLoop` -> `handleResponseCreate` -> `convertEventToRequest`. A new helper `inferDefaultProviderFromPath` returns `schemas.OpenAI` for paths starting with `/openai/` and an empty provider for all other paths (including the unified `/v1/responses`, which is multi-provider by design and intentionally requires an explicit provider/model prefix). Both `ParseModelString` call sites now receive the correct default, so bare model strings are accepted on the integration-scoped path without affecting the unified path behavior.

Tests added for `inferDefaultProviderFromPath` (integration and unified paths) and `convertEventToRequest` (bare model accepted on the integration path, rejected on the unified path, prefixed models work on both paths, an Anthropic-prefixed model works on the unified path).

Closes maximhq#2995
thiscantbeserious added a commit to thiscantbeserious/bifrost that referenced this pull request on Apr 24, 2026
…ximhq#3006)

This fix for maximhq#2995 was originally included in this feature branch as a side effect. It has been extracted into a focused PR, maximhq#3006, against main that implements the narrower path-sniff approach requested by the maintainer on maximhq#2959 (only the integration path defaults to OpenAI; the unified path stays strict). This commit restores the empty default at both call sites so this branch no longer overlaps with maximhq#3006.
Summary
The WebSocket responses handler rejected bare model strings (e.g. `gpt-4o`) on the `/openai/v1/responses` integration path with a 400 "failed to parse model string" error. This PR fixes the two `ParseModelString` call sites so the integration path infers `openai` as the default provider from the request URL, matching the behavior of the HTTP path.

Changes

- Added an `inferDefaultProviderFromPath(path string) schemas.ModelProvider` helper in `wsresponses.go`. It returns `schemas.OpenAI` for paths starting with `/openai/`, and an empty provider for all other paths (including the unified `/v1/responses`, which is multi-provider by design and intentionally requires an explicit `provider/model` prefix).
- Threaded a `defaultProvider schemas.ModelProvider` parameter from `handleUpgrade` (where `ctx.Path()` is available) through `eventLoop` and `handleResponseCreate` to `convertEventToRequest`. Both `ParseModelString` call sites (lines 177 and 422 on `origin/main`) now receive the correct default.
- The change is local to `wsresponses.go`, requires no new types, and reads cleanly at the call site. The session object is not the right place for request-scoped routing data, and the parameter threading is shallow (three levels).

Type of change
Affected areas
How to test
Run the handler unit tests, which cover the new helper and the parse behavior on both paths:
Expected output:
Live reproduction (requires a running Bifrost instance with an OpenAI provider configured and `allow_direct_keys: true`):

Screenshots/Recordings
N/A
Breaking changes
Related issues
Closes #2995. Extracted from #2775.
Security considerations
The path-sniff only changes which default provider value is passed to `ParseModelString`. It does not affect auth, key selection, or any other security-relevant logic. The unified `/v1/responses` path is unchanged.

Checklist
- Read `docs/contributing/README.md` and followed the guidelines